
    On the Modifications of a Broyden's Single Parameter Rank-Two Quasi-Newton Method for Unconstrained Minimization

    The thesis is concerned mainly with finding the numerical solution of non-linear unconstrained minimization problems. We consider a well-known class of optimization methods called the quasi-Newton methods, or variable metric methods. In particular, we focus on the class of quasi-Newton methods known as Broyden's single-parameter rank-two methods. We also investigate the global convergence properties of some step-length procedures. As a direct result of these investigations, a global convergence proof for the Armijo quasi-Newton method is given. Some preliminary modifications and numerical experiments are carried out to gain useful numerical experience for improving the quasi-Newton updates. We then derive two improvement techniques: in the first we employ a switching criterion between the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) direction and the steepest descent direction, and in the second we introduce a reduced trace-norm condition BFGS update. The thesis includes results illustrating the numerical performance of the modified methods on a chosen set of test problems. Limitations and some possible extensions are also given to conclude the thesis.
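
    For reference, the baseline that these modifications build on is the BFGS quasi-Newton iteration combined with an Armijo backtracking line search. The following is a minimal illustrative sketch of that baseline in Python (the test function, tolerances, and constants are arbitrary choices, not those of the thesis); the switching and reduced trace-norm variants themselves are not reproduced here.

```python
# Minimal sketch of a BFGS quasi-Newton iteration with an Armijo backtracking
# line search (illustrative constants; not the thesis's modified methods).
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def armijo(f, x, g, d, alpha=1.0, beta=0.5, sigma=1e-4):
    # Backtrack until the Armijo sufficient-decrease condition holds.
    while f(x + alpha * d) > f(x) + sigma * alpha * g.dot(d):
        alpha *= beta
    return alpha

def bfgs_armijo(f, grad, x0, tol=1e-6, max_iter=200):
    n = x0.size
    H = np.eye(n)                      # inverse-Hessian approximation
    x, g = x0.copy(), grad(x0)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H.dot(g)                  # quasi-Newton search direction
        alpha = armijo(f, x, g, d)
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s.dot(y)
        if sy > 1e-12:                 # skip the update if it would break positive definiteness
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V.dot(H).dot(V.T) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

print(bfgs_armijo(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0])))
```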

    A class of diagonal quasi-newton methods for large-scale convex minimization

    We study the convergence properties of a class of low-memory methods for solving large-scale unconstrained problems. This class of methods belongs to the quasi-Newton family, except that the approximation to the Hessian at each step is a diagonal matrix. Using appropriate scaling, we show that the methods can be implemented so as to be globally and R-linearly convergent with standard inexact line searches. Preliminary numerical results suggest that the methods are a good alternative to other low-memory methods such as the CG and spectral gradient methods.
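
    To give the flavour of such methods, the sketch below uses one common diagonal updating rule, in which the diagonal Hessian approximation is corrected to satisfy the weak secant condition s^T D s = s^T y and is safeguarded to remain positive, so that both storage and per-iteration cost stay O(n). It is an illustrative stand-in, not the paper's exact scaled update.

```python
# Illustrative diagonal quasi-Newton iteration (not the paper's exact method):
# the Hessian approximation is a positive diagonal D, so storage is O(n).
import numpy as np

def diag_qn(f, grad, x0, tol=1e-6, max_iter=500):
    x, g = x0.copy(), grad(x0)
    D = np.ones_like(x)                      # diagonal Hessian approximation
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g / D                           # inverting a diagonal is elementwise
        alpha, f0 = 1.0, f(x)                # simple Armijo backtracking
        while f(x + alpha * d) > f0 + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        s2 = s * s
        denom = s2.dot(s2)
        if denom > 1e-12:                    # least-change correction enforcing s^T D s = s^T y
            D = D + ((s.dot(y) - (D * s).dot(s)) / denom) * s2
        D = np.maximum(D, 1e-8)              # safeguard positivity
        x, g = x_new, g_new
    return x
```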

    Modified Quasi-Newton Methods For Large-Scale Unconstrained Optimization

    The focus of this thesis is on finding the unconstrained minimizer of a function when the dimension n is large. Specifically, we focus on the well-known class of optimization methods called the quasi-Newton methods. First we briefly give some mathematical background. Then we discuss the quasi-Newton methods, the fundamental methods underlying most approaches to large-scale unconstrained optimization, as well as the related so-called line search methods. A review of the optimization methods currently available for solving large-scale problems is also given. The main practical deficiency of quasi-Newton methods is the high computational cost of the search directions, which is the key issue in large-scale unconstrained optimization. To address this deficiency, we introduce a variety of techniques for improving quasi-Newton methods for large-scale problems, including scaling the SR1 update, matrix-storage-free methods, and the extension of modified BFGS updates to a limited-memory scheme. Comprehensive theoretical and experimental results are also given. Finally, we comment on some achievements of our research. Possible extensions are also given to conclude the thesis.
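
    For reference, the SR1 update mentioned above has the standard form shown below, where s_k is the step and y_k is the corresponding gradient difference; the scaled and matrix-storage-free variants developed in the thesis are not reproduced here.

```latex
% Standard SR1 update, with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\mathsf{T}}}{(y_k - B_k s_k)^{\mathsf{T}} s_k}
```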

    Modifications of the Limited Memory BFGS Algorithm for Large-scale Nonlinear Optimization

    In this paper we present two new numerical methods for unconstrained large-scale optimization. These methods employ update formulae derived by considering different techniques for approximating the objective function. A theoretical analysis is given to show the advantages of using these update formulae. It is observed that these update formulae can be employed within the framework of a limited memory strategy with only a modest increase in the linear algebra cost. Comparative results with the limited memory BFGS (L-BFGS) method are presented.
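
    For context, the limited memory framework referred to above is usually realised through the standard L-BFGS two-loop recursion, sketched below; the modified update formulae proposed in the paper would be inserted into this framework and are not reproduced here.

```python
# Standard L-BFGS two-loop recursion: computes d = -H_k g from the m most
# recent correction pairs (s_i, y_i) without forming any n-by-n matrix.
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        q -= a * y
        alphas.append(a)
    if s_list:
        s, y = s_list[-1], y_list[-1]
        gamma = s.dot(y) / y.dot(y)          # common initial scaling H_k^0 = gamma * I
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest pair first
        rho = 1.0 / y.dot(s)
        beta = rho * y.dot(r)
        r += (a - beta) * s
    return -r                                # search direction -H_k g
```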

    A multiperson pursuit problem on a closed convex set in Hilbert space

    A differential game of pursuit of an evader by a finite number of pursuers on a closed convex set in l2-space is studied. The game is described by simple differential equations, and the players' controls obey integral constraints. The game is deemed completed if a pursuer makes exact contact with the evader. It is shown that even if the control resource of each individual pursuer is less than that of the evader, completion of the game is still possible.
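
    A representative formulation of this type of game (the notation is illustrative, not necessarily that of the paper) couples simple-motion dynamics with integral constraints on the players' controls:

```latex
% Simple-motion pursuit game with integral constraints (illustrative notation):
\dot{x}_i = u_i, \qquad \dot{y} = v, \qquad
\int_0^{\infty} \|u_i(t)\|^2 \, dt \le \rho_i^2, \qquad
\int_0^{\infty} \|v(t)\|^2 \, dt \le \sigma^2,
% and pursuit is completed if x_i(\tau) = y(\tau) for some pursuer i at a finite time \tau.
```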

    An improved multi-step gradient-type method for large scale optimization

    In this paper, we propose an improved multi-step diagonal updating method for large-scale unconstrained optimization. Our approach is based on constructing a new gradient-type method by means of interpolating curves. We measure the distances required to parameterize the interpolating polynomials via a norm defined by a positive-definite matrix. By developing an implicit updating approach, we obtain an improved version of the Hessian approximation in diagonal matrix form, while avoiding the computational expense of actually calculating the improved approximation matrix. The effectiveness of the proposed method is evaluated by means of computational comparison with the BB method and its variants. We show that our method is globally convergent and only requires O(n) memory allocations.
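
    For comparison, the Barzilai-Borwein (BB) baseline mentioned above can be sketched as follows; this is only the standard BB step-length rule, not the paper's multi-step diagonal updating scheme.

```python
# Sketch of the Barzilai-Borwein (BB) gradient method, used here purely as the
# comparison baseline named in the abstract.
import numpy as np

def bb_gradient(grad, x0, tol=1e-6, max_iter=1000):
    x = x0.copy()
    g = grad(x)
    alpha = 1e-4                         # small initial step length
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        if sy > 1e-12:
            alpha = s.dot(s) / sy        # first BB step length: s^T s / s^T y
        x, g = x_new, g_new
    return x
```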

    Dynamic robust stabilization of stochastic differential control systems

    Dynamic feedback laws that stabilize stochastic differential control systems are provided. We extend the well-known Artstein–Sontag theorem to derive necessary and sufficient conditions for the dynamic robust stabilization of stochastic differential systems. An explicit formula for a feedback law exhibiting dynamic robust stability in probability is established, and a numerical example is also given to illustrate our results.
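
    For background, the Artstein-Sontag framework for a deterministic control-affine system with a control Lyapunov function V is built around Sontag's universal formula, shown below in the single-input case with a(x) = L_f V(x) and b(x) = L_g V(x); the stochastic and dynamic extensions developed in the paper are not reproduced here.

```latex
% Sontag's universal formula for \dot{x} = f(x) + g(x)u (deterministic, single input):
u(x) =
\begin{cases}
-\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0,\\[4pt]
0, & b(x) = 0.
\end{cases}
```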

    Positive-definite memoryless symmetric rank one method for large-scale unconstrained optimization

    The memoryless quasi-Newton method is the quasi-Newton method in which the approximation to the inverse Hessian is, at each step, updated from a positive multiple of the identity matrix. Hence, its search direction can be computed without storing any matrices, that is, without O(n^2) storage. In this paper, a memoryless symmetric rank-one (SR1) method for solving large-scale unconstrained optimization problems is presented. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. We therefore propose that the memoryless SR1 method be updated from a positive scaling of the identity, in which the scaling factor is derived so as to preserve positive definiteness and improve the conditioning of the scaled memoryless SR1 update. Under some standard conditions it is shown that the method is globally and R-linearly convergent. Numerical results show that the memoryless SR1 method is very encouraging.
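
    An illustrative sketch of how such a direction can be formed with vector operations only is given below; the safeguard and the fixed scaling argument used here are simple placeholders, not the scaling factor derived in the paper.

```python
# Memoryless SR1 direction: the inverse-Hessian approximation is rebuilt each
# iteration from gamma*I plus a single SR1 correction, so only O(n) storage is
# needed. The positive-definiteness safeguard below is an illustrative fallback.
import numpy as np

def memoryless_sr1_direction(g, s, y, gamma):
    """d = -H g with H = gamma*I + v v^T / (v^T y), where v = s - gamma*y."""
    v = s - gamma * y
    vty = v.dot(y)
    if vty <= 1e-12:                 # correction could destroy positive definiteness
        return -gamma * g            # fall back to scaled steepest descent
    return -(gamma * g + (v.dot(g) / vty) * v)
```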

    Scaled memoryless BFGS preconditioned steepest descent method for very large-scale unconstrained optimization

    A preconditioned steepest descent (SD) method for solving very large (with dimensions up to 10^6) unconstrained optimization problems is presented. The basic idea is to incorporate the preconditioning technique within the framework of the SD method. The preconditioner, which is a scaled memoryless BFGS updating matrix, is used in place of the usual scaling strategy for the SD method. The scaled memoryless BFGS preconditioned SD direction can then be computed without any additional storage compared with a standard scaled SD direction. Under very mild conditions it is shown that, for uniformly convex functions, the method is globally and linearly convergent. Numerical results are also given to illustrate the use of such preconditioning within the SD method. Our numerical study shows that the proposed preconditioned SD method significantly outperforms the SD method with Oren-Luenberger scaling and the conjugate gradient method, and is comparable to the limited memory BFGS method.
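
    The essential point is that the product of the scaled memoryless BFGS matrix with the gradient can be assembled from the latest correction pair alone. The sketch below illustrates this with the common self-scaling choice gamma = (s^T y)/(y^T y); the scaling actually used in the paper may differ.

```python
# Memoryless BFGS preconditioned direction: H*g is formed directly from the
# latest pair (s, y) and a scalar gamma, so no n-by-n matrix is ever stored.
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    sy = s.dot(y)
    if sy <= 1e-12:                      # pair unusable; fall back to steepest descent
        return -g
    gamma = sy / y.dot(y)                # self-scaling of the initial matrix gamma*I
    rho = 1.0 / sy
    sg, yg = s.dot(g), y.dot(g)
    # Expansion of H*g with H = (I - rho*s*y^T) * gamma*I * (I - rho*y*s^T) + rho*s*s^T
    Hg = (gamma * g
          - gamma * rho * sg * y
          + (rho * sg * (1.0 + gamma * rho * y.dot(y)) - gamma * rho * yg) * s)
    return -Hg
```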

    Convergence and stability of line search methods for unconstrained optimization.

    This paper explores the stability of general line search methods, in the sense of Lyapunov, for minimizing a smooth nonlinear function. In particular, we give sufficient conditions for a line search method to be globally asymptotically stable. Our analysis suggests that the proposed sufficient conditions for asymptotic stability are equivalent to the Zoutendijk-type conditions in conventional global convergence analysis.
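
    For reference, the Zoutendijk-type condition referred to above is usually stated as follows, where θ_k denotes the angle between the search direction d_k and the steepest-descent direction:

```latex
% Zoutendijk condition for a line search method with directions d_k:
\sum_{k \ge 0} \cos^2\theta_k \, \|\nabla f(x_k)\|^2 < \infty,
\qquad
\cos\theta_k = \frac{-\nabla f(x_k)^{\mathsf{T}} d_k}{\|\nabla f(x_k)\|\, \|d_k\|}.
```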